WE economy: Potential of mutual aid distribution based on moral responsibility and risk vulnerability

Kato, Takeshi

arXiv.org Artificial Intelligence

Reducing wealth inequality and disparity is a global challenge. Economic systems are mainly divided into (1) gift and reciprocity, (2) power and redistribution, (3) market exchange, and (4) mutual aid without reciprocal obligations. The current inequality stems from a capitalist economy consisting of (2) and (3). To sublimate (1), which is the human economy, into (4), the concept of a "mixbiotic society" has been proposed in the philosophical realm. This is a society in which free and diverse individuals, "I," mix with each other, recognize their respective "fundamental incapability," and sublimate it into "WE" solidarity. The economy in this society must have moral responsibility as a coadventurer and consideration for vulnerability to risk. Therefore, I focus on two factors of mind perception, moral responsibility and risk vulnerability, and propose a novel model of wealth distribution following an econophysical approach. Specifically, I developed a joint-venture model, a redistribution model within the joint-venture model, and a "WE economy" model. A simulation comparing combinations of joint ventures and redistribution with WE economies reveals that WE economies have the advantages of reducing inequality and of resilience in normalizing the wealth distribution, and the disadvantage of susceptibility to free riders. However, this disadvantage can be compensated for by fostering consensus and fellowship, and by complementing the WE economy with joint ventures. This study essentially presents the effectiveness of moral responsibility, the complementarity between the WE economy and the joint economy, and a direction for the economy toward reducing inequality. Future challenges are to develop the WE economy model based on real economic analysis and psychology, and to promote WE economy fieldwork with worker co-ops and platform cooperatives to realize a desirable mixbiotic society.
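The abstract does not reproduce the paper's actual equations, but the mutual-aid idea can be illustrated with a minimal econophysics-style toy: every agent contributes a fixed share of wealth to a common pool, and the pool is paid out in proportion to "risk vulnerability," crudely approximated here as each agent's shortfall from the richest. All function names, the `aid_rate` parameter, and the shortfall weighting are illustrative assumptions, not the paper's model.

```python
import random

def gini(w):
    # Gini coefficient of a wealth distribution (0 = perfect equality).
    w = sorted(w)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return 2 * cum / (n * sum(w)) - (n + 1) / n

def mutual_aid_round(wealth, aid_rate=0.1):
    # Each agent keeps (1 - aid_rate) of its wealth; the pooled
    # contributions are redistributed in proportion to each agent's
    # shortfall from the richest agent (a stand-in for vulnerability).
    kept = [w * (1 - aid_rate) for w in wealth]
    pool = sum(wealth) * aid_rate
    top = max(kept)
    shortfall = [top - k + 1e-9 for k in kept]
    s = sum(shortfall)
    return [k + pool * d / s for k, d in zip(kept, shortfall)]

random.seed(0)
wealth = [random.lognormvariate(0, 1) for _ in range(1000)]
g_before = gini(wealth)
for _ in range(50):
    wealth = mutual_aid_round(wealth)
g_after = gini(wealth)
print(round(g_before, 3), round(g_after, 3))
```

Total wealth is conserved in each round (the pool is fully redistributed), so any drop in the Gini coefficient comes purely from the vulnerability-weighted transfers, which is the qualitative claim the abstract makes about WE economies.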


Fair Federated Medical Image Segmentation via Client Contribution Estimation

Jiang, Meirui, Roth, Holger R, Li, Wenqi, Yang, Dong, Zhao, Can, Nath, Vishwesh, Xu, Daguang, Dou, Qi, Xu, Ziyue

arXiv.org Artificial Intelligence

How to ensure fairness is an important topic in federated learning (FL). Recent studies have investigated how to reward clients based on their contribution (collaboration fairness) and how to achieve uniformity of performance across clients (performance fairness). Despite progress on each individually, we argue that it is critical to consider them together in order to engage and motivate more diverse clients to join FL and derive a high-quality global model. In this work, we propose a novel method to optimize both types of fairness simultaneously. Specifically, we propose to estimate client contribution in gradient and data space. In gradient space, we monitor the gradient direction differences of each client with respect to others. In data space, we measure the prediction error on client data using an auxiliary model. Based on this contribution estimation, we propose an FL method, federated training via contribution estimation (FedCE), which uses the estimates as global model aggregation weights. We have theoretically analyzed our method and empirically evaluated it on two real-world medical datasets. The effectiveness of our approach has been validated with significant performance improvements, better collaboration fairness, better performance fairness, and comprehensive analytical studies.
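The exact FedCE estimator is not given in this abstract; the sketch below only illustrates the gradient-space idea under simplifying assumptions: score each client by the mean cosine similarity of its gradient to the other clients' gradients, clip negative scores, and normalize the scores into aggregation weights. A client whose gradient points away from the consensus (e.g., a free rider submitting junk) then receives near-zero weight. All names here are illustrative.

```python
import math
import random

def cosine(a, b):
    # Cosine similarity of two flat gradient vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

def contribution_weights(grads):
    # Score each client by mean cosine similarity to the other
    # clients' gradients; clip negatives and normalize into weights.
    n = len(grads)
    scores = []
    for i in range(n):
        s = sum(cosine(grads[i], grads[j]) for j in range(n) if j != i)
        scores.append(max(s / (n - 1), 0.0))
    total = sum(scores)
    return [s / total for s in scores]

random.seed(0)
base = [random.gauss(0, 1) for _ in range(10)]
# Three clients roughly agree; a fourth pushes in the opposite direction.
grads = [[b + 0.1 * random.gauss(0, 1) for b in base] for _ in range(3)]
grads.append([-b for b in base])
weights = contribution_weights(grads)
print([round(w, 3) for w in weights])
```

With these weights plugged into the aggregation step, the dissenting client contributes almost nothing to the global model, which is the intuition behind using contribution estimates as aggregation weights.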


Free-riders in Federated Learning: Attacks and Defenses

Lin, Jierui, Du, Min, Liu, Jian

arXiv.org Machine Learning

Federated learning is a recently proposed paradigm that enables multiple clients to collaboratively train a joint model. It allows clients to train models locally and leverages a parameter server to generate a global model by aggregating the locally submitted gradient updates at each round. Although the incentive model for federated learning has not been fully developed, participants are expected to receive rewards, or the privilege to use the final global model, as compensation for the effort of training. Therefore, a client who does not have any local data has an incentive to construct local gradient updates in order to deceive for rewards. In this paper, we are the first to propose the notion of free-rider attacks, exploring possible ways an attacker may construct gradient updates without any local training data. Furthermore, we explore possible defenses that could detect the proposed attacks, and propose a new high-dimensional detection method called STD-DAGMM, which works particularly well for anomaly detection of model parameters. We extend the attacks and defenses to consider more free riders as well as differential privacy, which sheds light on and calls for future research in this field.

INTRODUCTION

Federated learning [1], [2], [3] has been proposed to facilitate joint model training leveraging data from multiple clients, where the training process is coordinated by a parameter server. Throughout the process, clients' data stay local, and only model parameters are communicated among clients through the parameter server. A typical training iteration works as follows. First, the parameter server sends the newest global model to each client. Then, each client locally updates the model using its local data and reports the updated gradients to the parameter server. Finally, the server performs model aggregation on all submitted local updates to form a new global model, which has better performance than models trained on any single client's data. Compared with the alternative of simply collecting all data from the clients and training a model centrally, federated learning saves communication overhead by transmitting only model parameters, and protects privacy since all data stay local.
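The training iteration described above can be sketched as a single federated-averaging round. This is a simplified stand-in for the actual protocol: `fedavg_round` and the toy `local_update` rule (which nudges weights toward the mean of the client's scalar data rather than running real SGD) are illustrative assumptions.

```python
def fedavg_round(global_model, clients, local_update):
    # One round: broadcast the global model, collect each client's
    # locally updated weights, and average them weighted by the size
    # of each client's local dataset.
    updates, sizes = [], []
    for data in clients:
        updates.append(local_update(list(global_model), data))
        sizes.append(len(data))
    total = sum(sizes)
    return [sum(u[i] * s for u, s in zip(updates, sizes)) / total
            for i in range(len(global_model))]

def local_update(model, data, lr=0.5):
    # Toy "training" rule: move each weight toward the client's
    # local data mean (a placeholder for local gradient steps).
    target = sum(data) / len(data)
    return [w + lr * (target - w) for w in model]

clients = [[1.0, 1.0], [3.0, 3.0, 3.0, 3.0]]  # scalar "datasets"
model = [0.0]
for _ in range(20):
    model = fedavg_round(model, clients, local_update)
print(model)  # approaches the size-weighted mean of all client data
```

The fixed point is the size-weighted mean of the clients' data (here (1 + 1 + 3 + 3 + 3 + 3) / 6), which shows how clients holding more data pull the global model harder, and why a data-free free rider must fabricate updates to participate at all.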


Quantifying the Burden of Exploration and the Unfairness of Free Riding

Jung, Christopher, Kannan, Sampath, Lutz, Neil

arXiv.org Machine Learning

We consider the multi-armed bandit setting with a twist. Rather than having just one decision maker deciding which arm to pull in each round, we have $n$ different decision makers (agents). In the simple stochastic setting, we show that one of the agents (called the free rider), who has access to the history of other agents playing some zero-regret algorithm, can achieve just $O(1)$ regret, as opposed to the regret lower bound of $\Omega(\log T)$ when one decision maker is playing in isolation. In the linear contextual setting, we show that if the other agents play a particular, popular zero-regret algorithm (UCB), then the free rider can again achieve $O(1)$ regret. In order to prove this result, we give a deterministic lower bound on the number of times each suboptimal arm must be pulled in UCB. In contrast, we show that the free rider cannot beat the standard single-player regret bounds in certain partial-information settings.
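As a rough illustration of the free-riding idea (not the paper's construction or proof), the sketch below runs one UCB1 agent on Bernoulli arms while a free rider, observing the shared history, simply pulls the arm with the best empirical mean. The free rider pays almost no exploration cost, so its cumulative regret stays far below the exploring agent's. Arm means, horizon, and the exact UCB1 bonus form are illustrative choices.

```python
import math
import random

def ucb1(counts, means, t):
    # UCB1 index: empirical mean plus an exploration bonus.
    for a in range(len(counts)):
        if counts[a] == 0:
            return a  # pull each arm once before using the index
    return max(range(len(counts)),
               key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

random.seed(1)
p = [0.3, 0.5, 0.7]                      # Bernoulli arm means; arm 2 is best
counts, means = [0, 0, 0], [0.0, 0.0, 0.0]
agent_regret = free_rider_regret = 0.0
for t in range(1, 5001):
    a = ucb1(counts, means, t)           # the UCB agent pays to explore
    r = 1.0 if random.random() < p[a] else 0.0
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]
    agent_regret += max(p) - p[a]
    # The free rider exploits the shared history at no exploration cost:
    # it just pulls the arm with the highest empirical mean so far.
    best = max(range(3), key=lambda i: means[i])
    free_rider_regret += max(p) - p[best]
print(round(agent_regret, 1), round(free_rider_regret, 1))
```

The exploring agent's regret grows logarithmically with the horizon, while the free rider's accrues only during the brief early phase when the empirical means are unreliable, matching the $O(1)$ versus $\Omega(\log T)$ contrast in the abstract.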